144 research outputs found

    Hands-Off Therapist Robot Behavior Adaptation to User Personality for Post-Stroke Rehabilitation Therapy

    This paper describes a hands-off therapist robot that monitors, assists, encourages, and socially interacts with post-stroke users during rehabilitation exercises. We developed a behavior adaptation system that uses the user's introversion-extroversion personality trait and the number of exercises performed to adjust the robot's social interaction parameters (e.g., interaction distances/proxemics, speed, and vocal content) toward a customized post-stroke rehabilitation therapy. The experimental results demonstrate the robot's autonomous behavior adaptation to the user's personality and the resulting improvements in the user's exercise task performance.
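
    The adaptation described above can be illustrated with a small sketch: map an assumed introversion-extroversion score and the number of completed exercises to interaction distance, speed, and vocal style. All parameter names and ranges below are illustrative assumptions, not the paper's actual controller.

```python
# Minimal sketch (not the authors' implementation) of personality-driven
# behavior adaptation: interaction distance, speed, and vocal style are
# interpolated from an assumed extroversion score in [0, 1] and nudged as
# the user completes more exercises. All ranges are illustrative.
from dataclasses import dataclass


@dataclass
class InteractionParams:
    distance_m: float   # proxemics: how far the robot stands from the user
    speed_scale: float  # relative movement/speech speed
    vocal_style: str    # "nurturing" vs. "challenging" encouragement


def adapt_behavior(extroversion: float, exercises_done: int) -> InteractionParams:
    """Map extroversion (0 = introvert, 1 = extrovert) and progress to
    interaction parameters: introverts get more personal space and gentler
    prompts; extroverts get closer, faster, more challenging prompts."""
    distance = 1.5 - 0.7 * extroversion          # 1.5 m down to 0.8 m
    speed = 0.8 + 0.4 * extroversion             # 0.8x up to 1.2x
    # Gradually increase challenge as the user completes more exercises.
    speed += min(exercises_done / 50.0, 0.2)
    style = "challenging" if extroversion > 0.5 else "nurturing"
    return InteractionParams(round(distance, 2), round(speed, 2), style)


if __name__ == "__main__":
    print(adapt_behavior(extroversion=0.2, exercises_done=10))
    print(adapt_behavior(extroversion=0.9, exercises_done=40))
```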

    Encouraging User Autonomy through Robot-Mediated Intervention

    In this paper, we focus on the question of promoting user autonomy in a healthcare task. During a robot-mediated intervention, a socially assistive robot should seek to encourage users to learn skills and behaviors that will generalize and persist beyond the duration of the intervention. Treating a care-receiver as an apprentice rather than a dependent results in greater proficiency at self-management [2]. This philosophy must be incorporated into the design and implementation of robot-mediated healthcare interventions in order for them to be accepted by real-world users. Our approach toward encouraging user autonomy and promoting generalized skill learning was to model the occupational therapy technique of graded cueing [1]. Graded cueing involves giving a patient the minimum required feedback while guiding them through a task. This method promotes generalized skill learning.
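
    A minimal sketch of the graded cueing idea follows, assuming a fixed ladder of cue levels that escalates only when the user does not succeed; the cue wordings and the success check are illustrative stand-ins, not the intervention's actual design.

```python
# Minimal sketch of graded cueing: give the least specific prompt first and
# escalate only on failure. Cue levels and the success check are toy
# assumptions for illustration.
import random

CUE_LEVELS = [
    "general prompt: 'Try the next step.'",
    "specific verbal cue: 'Try raising your arm a bit higher.'",
    "gesture + verbal cue: robot points to the target while prompting.",
    "demonstration: robot performs the movement for imitation.",
]


def attempt_succeeds(cue_level: int) -> bool:
    """Stand-in for real task monitoring; more specific cues help more."""
    return random.random() < 0.3 + 0.2 * cue_level


def run_graded_cueing_trial() -> int:
    """Escalate cue specificity only on failure; return the level at which
    the user succeeded (or the maximum level if they never do)."""
    for level, cue in enumerate(CUE_LEVELS):
        print(f"[level {level}] {cue}")
        if attempt_succeeds(level):
            print("  -> success; no further cueing needed")
            return level
    print("  -> max cue level reached")
    return len(CUE_LEVELS) - 1


if __name__ == "__main__":
    random.seed(0)
    run_graded_cueing_trial()
```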

    Where Am I? Scene Recognition for Mobile Robots Using Audio Features

    Automatic recognition of unstructured environments is an important problem for mobile robots. We focus on using audio features to recognize different auditory environments, which are characterized by different types of sounds. Audio provides a complementary means of scene recognition that can effectively augment visual information. In particular, audio can be used for both the analysis and characterization of the environment at a higher level of abstraction. We begin our investigation by recognizing different auditory environments from audio information alone. In this paper, we extract low-level audio features from a mobile robot, investigate high-level features based on spectral analysis for scene characterization, and build a recognition system that discriminates between different environments based on these audio features.
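
    As a rough illustration of this kind of pipeline, the sketch below frames an audio signal, computes simple spectral features (log-energy and spectral centroid), and classifies with a nearest-centroid rule; the feature set and the toy "scenes" are assumptions for illustration, not the paper's exact features or classifier.

```python
# Minimal sketch of audio-based scene recognition with spectral features and
# a nearest-centroid classifier. Feature choices and toy scenes are assumed.
import numpy as np


def spectral_features(signal: np.ndarray, sr: int, frame: int = 1024) -> np.ndarray:
    """Return mean [log-energy, spectral centroid] over non-overlapping frames."""
    feats = []
    for start in range(0, len(signal) - frame, frame):
        x = signal[start:start + frame] * np.hanning(frame)
        mag = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(frame, d=1.0 / sr)
        energy = np.log(np.sum(mag ** 2) + 1e-12)
        centroid = np.sum(freqs * mag) / (np.sum(mag) + 1e-12)
        feats.append([energy, centroid])
    return np.mean(feats, axis=0)


def nearest_centroid(x: np.ndarray, centroids: dict) -> str:
    """Pick the scene whose feature centroid is closest to x."""
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))


if __name__ == "__main__":
    sr = 16000
    t = np.arange(sr) / sr
    rng = np.random.default_rng(0)
    # Toy auditory "scenes": a low hum (e.g., corridor) vs. broadband noise (e.g., street).
    corridor = 0.5 * np.sin(2 * np.pi * 120 * t) + 0.05 * rng.standard_normal(sr)
    street = rng.standard_normal(sr)
    centroids = {
        "corridor": spectral_features(corridor, sr),
        "street": spectral_features(street, sr),
    }
    test_clip = 0.5 * np.sin(2 * np.pi * 130 * t) + 0.05 * rng.standard_normal(sr)
    print(nearest_centroid(spectral_features(test_clip, sr), centroids))
```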

    Quality-Diversity Generative Sampling for Learning with Synthetic Data

    Generative models can serve as surrogates for some real data sources by creating synthetic training datasets, but in doing so they may transfer biases to downstream tasks. We focus on protecting quality and diversity when generating synthetic training datasets. We propose quality-diversity generative sampling (QDGS), a framework for sampling data uniformly across a user-defined measure space, despite the data coming from a biased generator. QDGS is a model-agnostic framework that uses prompt guidance to optimize a quality objective across measures of diversity for synthetically generated data, without fine-tuning the generative model. Using balanced synthetic datasets generated by QDGS, we first debias classifiers trained on color-biased shape datasets as a proof-of-concept. By applying QDGS to facial data synthesis, we prompt for desired semantic concepts, such as skin tone and age, to create an intersectional dataset with a combined blend of visual features. Leveraging this balanced data for training classifiers improves fairness while maintaining accuracy on facial recognition benchmarks. Code available at: https://github.com/Cylumn/qd-generative-sampling. Comment: Accepted at AAAI 2024; 7 pages main, 12 pages total, 9 figures.
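
    The core quality-diversity idea can be sketched as follows: draw candidates from a biased generator, bin them along a user-defined measure, and keep only the highest-quality candidate per bin so the resulting dataset covers the measure space evenly. The toy generator, quality score, and one-dimensional measure below are assumptions; QDGS itself operates on a generative model's latent space with prompt guidance.

```python
# Minimal sketch of quality-diversity sampling from a biased generator,
# using a MAP-Elites-style archive (best candidate per measure bin).
# The generator, quality score, and measure here are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)


def biased_generator(n: int) -> np.ndarray:
    """Toy generator whose outputs cluster near measure value 0.2 (a bias)."""
    return np.clip(rng.normal(loc=0.2, scale=0.15, size=n), 0.0, 1.0)


def quality(sample: float) -> float:
    """Toy quality objective (e.g., a realism score); higher is better."""
    return float(1.0 - (sample - 0.5) ** 2)


def qd_sample(n_candidates: int = 5000, n_bins: int = 10) -> dict:
    """Keep the highest-quality candidate in each measure bin."""
    archive = {}  # bin index -> (quality, sample)
    for s in biased_generator(n_candidates):
        b = min(int(s * n_bins), n_bins - 1)  # the sample's measure bin
        q = quality(s)
        if b not in archive or q > archive[b][0]:
            archive[b] = (q, s)
    return archive


if __name__ == "__main__":
    archive = qd_sample()
    print(f"bins covered: {sorted(archive)} out of {list(range(10))}")
```

    Note that a purely sampling-based archive like this can leave severely under-represented bins empty; that gap is why an approach such as QDGS also optimizes in the generator's latent space rather than relying on rejection-style sampling alone.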

    Evaluating Temporal Patterns in Applied Infant Affect Recognition

    Agents must monitor their partners' affective states continuously in order to understand and engage in social interactions. However, methods for evaluating affect recognition do not account for changes in classification performance that may occur during occlusions or transitions between affective states. This paper addresses temporal patterns in affect classification performance in the context of an infant-robot interaction, where infants' affective states contribute to their ability to participate in a therapeutic leg movement activity. To support robustness to facial occlusions in video recordings, we trained infant affect recognition classifiers using both facial and body features. Next, we conducted an in-depth analysis of our best-performing models to evaluate how performance changed over time as the models encountered missing data and changing infant affect. During time windows when features were extracted with high confidence, a unimodal model trained on facial features achieved the same optimal performance as multimodal models trained on both facial and body features. However, multimodal models outperformed unimodal models when evaluated on the entire dataset. Additionally, model performance was weakest when predicting an affective state transition and improved after multiple predictions of the same affective state. These findings emphasize the benefits of incorporating body features in continuous affect recognition for infants. Our work highlights the importance of evaluating variability in model performance both over time and in the presence of missing data when applying affect recognition to social interactions. Comment: 8 pages, 6 figures, 10th International Conference on Affective Computing and Intelligent Interaction (ACII 2022).
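
    The temporal-evaluation idea can be illustrated with a small sketch that compares accuracy in the frames just after a ground-truth affective state change against accuracy within sustained states; the label and prediction sequences below are synthetic toy data, not the paper's models or dataset.

```python
# Minimal sketch of temporal evaluation: accuracy near affective state
# transitions vs. within sustained states, on synthetic toy sequences.
import numpy as np

rng = np.random.default_rng(1)

# Toy ground-truth affect sequence (0 = not distressed, 1 = distressed).
labels = np.repeat(np.tile([0, 1], 10), 20)   # 400 frames, 19 state changes

# Mark the change frame and the two frames that follow it as "near transition".
change = np.zeros(len(labels), dtype=bool)
change[1:] = labels[1:] != labels[:-1]
near_transition = np.convolve(change, np.ones(3), mode="full")[: len(labels)] > 0

# Toy predictions: noisier in the frames right after each state change.
preds = labels.copy()
flip_p = np.where(near_transition, 0.4, 0.05)
flips = rng.random(len(labels)) < flip_p
preds[flips] = 1 - preds[flips]

acc = lambda mask: float((preds[mask] == labels[mask]).mean())
print(f"accuracy near transitions:       {acc(near_transition):.2f}")
print(f"accuracy within sustained states: {acc(~near_transition):.2f}")
```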